Appendix Figure A.1: Input spikes. A. The input spikes, x
The input consists of 300 Poisson neurons: the first 100 encode the whisker stimulus, the next 100 encode the auditory cue, and the last 100 act as an additional noise source for the model. Of the 300 neurons, 60 are inhibitory (red). The input neurons project without restriction to the whole RSNN. The baseline firing rate of all input neurons is 5 Hz. The whisker stimulus and auditory cue are each encoded by an increase in firing rate lasting 10 ms, starting 4 ms after the onset of the actual stimulus.
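The input encoding above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stimulus onset times and the elevated firing rate during the 10 ms encoding window are assumed values chosen for the example.

```python
import numpy as np

def input_spikes(T=200, dt=1.0, stim_onset=50, cue_onset=120,
                 baseline_hz=5.0, elevated_hz=40.0, seed=0):
    """Sample binary spike trains for the 300 Poisson input neurons.

    Neurons 0-99 encode the whisker stimulus, neurons 100-199 the auditory
    cue, and neurons 200-299 fire at the baseline rate only (noise source).
    Each stimulus raises its population's rate for 10 ms, starting 4 ms
    after stimulus onset. `elevated_hz` is an assumption for illustration.
    """
    rng = np.random.default_rng(seed)
    rates = np.full((T, 300), baseline_hz)            # Hz, per 1 ms bin
    for onset, pop in ((stim_onset, slice(0, 100)),
                       (cue_onset, slice(100, 200))):
        start = onset + 4                             # 4 ms encoding delay
        rates[start:start + 10, pop] = elevated_hz    # 10 ms elevation
    p = rates * dt / 1000.0                           # spike prob. per bin
    return rng.random((T, 300)) < p

x = input_spikes()
print(x.shape)  # → (200, 300)
```

Each time bin draws an independent Bernoulli spike with probability rate × dt, the standard discrete-time approximation of a Poisson process for small bins.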
Trial matching: capturing variability with data-constrained spiking neural networks
Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a mouse cortical sensory-motor pathway in a tactile detection task, reported by licking, with a large recurrent spiking neural network (RSNN) fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty of matching the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and to neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data corresponding to task-irrelevant movements of the mouse.
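The optimal-transport distance between the two sets of trials can be illustrated with a minimal sketch. Under the assumptions that both sets contain equally many trials with uniform weights, and that each trial is summarized by a fixed-length feature vector (a choice made here for illustration, not taken from the paper), the OT problem reduces to an optimal one-to-one assignment between generated and recorded trials:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def trial_matching_distance(generated, recorded):
    """Optimal-transport distance between two equal-sized sets of trials.

    `generated` and `recorded` have shape (n_trials, n_features), e.g.
    per-trial population-rate summaries (an assumed featurization).
    With equal counts and uniform weights, OT reduces to an optimal
    one-to-one matching, solved here with the Hungarian algorithm.
    """
    # cost[i, j] = squared Euclidean distance between trial i and trial j
    diff = generated[:, None, :] - recorded[None, :, :]
    cost = (diff ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)      # hard trial matching
    return cost[rows, cols].mean()                # mean matched cost
```

Unlike comparing trial-averaged activity, this distance is small only when the model reproduces the full distribution of trials, including its variability.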
Sparse Learning with CART
Decision trees with binary splits are popularly constructed using the Classification and Regression Trees (CART) methodology. For regression models, this approach recursively divides the data into two near-homogeneous daughter nodes according to a split point that maximizes the reduction in sum-of-squares error (the impurity) along a particular variable. This paper aims to study the statistical properties of regression trees constructed with CART. In doing so, we find that the training error is governed by the Pearson correlation between the optimal decision stump and the response data in each node, which we bound by constructing a prior distribution on the split points and solving a nonlinear optimization problem. We leverage this connection between the training error and the Pearson correlation to show that CART with cost-complexity pruning achieves an optimal complexity/goodness-of-fit tradeoff when the depth scales with the logarithm of the sample size. Data-dependent quantities, which adapt to the dimensionality and latent structure of the regression model, are seen to govern the rates of convergence of the prediction error.
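The split-point search at the heart of CART can be sketched in a few lines. This is a minimal one-dimensional illustration (the function name and signature are made up for the example); full CART repeats this search over every variable and recurses into the two daughter nodes:

```python
import numpy as np

def best_split(x, y):
    """Find the split threshold along one variable that maximizes the
    reduction in sum-of-squares error (the impurity), as in CART.

    Returns (threshold, impurity_reduction). The reduction achieved by
    the optimal stump is what the paper relates to the Pearson
    correlation between the stump and the response data in the node.
    """
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    n = len(ys)
    total_sse = ((ys - ys.mean()) ** 2).sum()
    best = (None, 0.0)
    for i in range(1, n):                  # split between xs[i-1] and xs[i]
        if xs[i] == xs[i - 1]:
            continue                       # tied values: no valid threshold
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + \
              ((right - right.mean()) ** 2).sum()
        gain = total_sse - sse
        if gain > best[1]:
            best = ((xs[i - 1] + xs[i]) / 2, gain)
    return best
```

Sorting once and scanning the n − 1 candidate thresholds makes each node's search O(n log n) per variable; production implementations maintain running sums to compute each candidate's impurity in O(1).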